Multiagent Reinforcement Learning in Stochastic Games
Authors
Abstract
We adopt stochastic games as a general framework for dynamic noncooperative systems. This framework provides a way of describing the dynamic interactions of agents in terms of individuals' Markov decision processes. By studying this framework, we go beyond the common practice in the study of learning in games, which primarily focuses on repeated games or extensive-form games. For stochastic games with incomplete information, we design a multiagent reinforcement learning method which allows agents to learn Nash equilibrium strategies. We show in both theory and experiments that this algorithm converges. From the viewpoint of machine learning research, our work helps to establish the theoretical foundation for applying reinforcement learning, originally defined for single-agent systems, to multiagent systems.
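A minimal sketch of the kind of backup such a method performs is given below. It assumes two agents, small finite state and action sets, and solves the stage game at the next state only for a pure-strategy Nash equilibrium found by enumeration; the variable names and the fallback rule are ours, not the paper's, and the full method also handles mixed equilibria and more than two agents.

```python
# A minimal Nash-Q-style sketch (illustrative, not the authors' code).
import numpy as np

n_states, n_actions = 5, 3
gamma, alpha = 0.9, 0.1

# Each agent keeps a Q-table over the *joint* action space, for both agents.
Q = {i: np.zeros((n_states, n_actions, n_actions)) for i in (0, 1)}

def stage_game_values(s_next):
    """Payoff pair at a pure-strategy Nash equilibrium of the stage game
    (Q[0][s_next], Q[1][s_next]); if none exists, fall back to the cell
    with the largest payoff sum (a simplification for this sketch)."""
    fallback = None
    for a0 in range(n_actions):
        for a1 in range(n_actions):
            q0, q1 = Q[0][s_next, a0, a1], Q[1][s_next, a0, a1]
            no_deviation_0 = q0 >= Q[0][s_next, :, a1].max()
            no_deviation_1 = q1 >= Q[1][s_next, a0, :].max()
            if no_deviation_0 and no_deviation_1:
                return q0, q1
            if fallback is None or q0 + q1 > sum(fallback):
                fallback = (q0, q1)
    return fallback

def nash_q_update(s, a0, a1, r0, r1, s_next):
    """One backup for both agents after observing a joint transition."""
    v0, v1 = stage_game_values(s_next)
    Q[0][s, a0, a1] += alpha * (r0 + gamma * v0 - Q[0][s, a0, a1])
    Q[1][s, a0, a1] += alpha * (r1 + gamma * v1 - Q[1][s, a0, a1])

# Example: one update after observing state 0, joint action (1, 2), then state 3.
nash_q_update(0, 1, 2, r0=1.0, r1=-0.5, s_next=3)
```

Both agents' tables are updated by the same routine here, reflecting the self-play setting under which convergence is analyzed.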
Similar Resources
Joint Learning in Stochastic Games: Playing Coordination Games Within Coalitions
Despite the progress in multiagent reinforcement learning via formalisms based on stochastic games, these have difficulties coping with a high number of agents due to the combinatorial explosion in the number of joint actions. One possible way to reduce the complexity of the problem is to let agents form groups of limited size so that the number of joint actions is reduced. This paper inves...
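As a rough illustration of the combinatorial explosion and of the reduction fixed-size groups can buy (the agent counts, action counts, and coalition size below are made up for illustration, not taken from the paper):

```python
# Joint-action blow-up vs. fixed-size coalitions (hypothetical numbers).
n_agents, n_actions, coalition_size = 12, 4, 3

full_joint_actions = n_actions ** n_agents      # one table over all agents jointly
per_coalition = n_actions ** coalition_size     # joint actions inside one coalition
n_coalitions = n_agents // coalition_size

print(f"joint actions, all agents together: {full_joint_actions:,}")          # 16,777,216
print(f"joint actions per coalition of {coalition_size}: {per_coalition}")    # 64
print(f"total entries across {n_coalitions} coalitions: {n_coalitions * per_coalition}")  # 256
```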
Convergence Problems of General-Sum Multiagent Reinforcement Learning
Stochastic games are a generalization of MDPs to multiple agents, and can be used as a framework for investigating multiagent learning. Hu and Wellman (1998) recently proposed a multiagent Q-learning method for general-sum stochastic games. In addition to describing the algorithm, they provide a proof that the method will converge to a Nash equilibrium for the game under specified conditions. T...
An Analysis of Stochastic Game Theory for Multiagent Reinforcement Learning
Learning behaviors in a multiagent environment is crucial for developing and adapting multiagent systems. Reinforcement learning techniques have addressed this problem for a single agent acting in a stationary environment, which is modeled as a Markov decision process (MDP). But multiagent environments are inherently non-stationary since the other agents are free to change their behavior as th...
Multiagent reinforcement learning: algorithm converging to Nash equilibrium in general-sum discounted stochastic games
Reinforcement learning has turned out to be a technique that has allowed robots to ride a bicycle, computers to play backgammon at the level of human world masters, and complicated high-dimensional tasks such as elevator dispatching to be solved. Can it come to the rescue in the next generation of challenging problems like playing football or bidding on virtual markets? Reinforcement learning that provides a way o...
Multiagent Q-Learning: Preliminary Study on Dominance between the Nash and Stackelberg Equilibriums
Some game-theoretic approaches to multiagent reinforcement learning in self-play, i.e., when agents use the same algorithm for choosing actions, employ equilibria, such as the Nash equilibrium, to compute the policies of the agents. These approaches have been applied only to simple examples. In this paper, we present an extended version of Nash Q-Learning using the Stackelberg equilibrium to...
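A minimal sketch of the kind of Stackelberg selection this snippet alludes to, under simplifying assumptions (two agents, pure strategies, agent 0 acting as leader); the function and matrices below are ours for illustration, not the paper's:

```python
import numpy as np

def stackelberg_value(Q_leader, Q_follower):
    """Pure-strategy Stackelberg point of a two-player stage game.
    Q_leader, Q_follower: arrays of shape (n_leader_actions, n_follower_actions).
    The leader commits to an action first; the follower best-responds to it."""
    best = None
    for a_l in range(Q_leader.shape[0]):
        a_f = int(np.argmax(Q_follower[a_l]))          # follower's best response
        if best is None or Q_leader[a_l, a_f] > best[0]:
            best = (Q_leader[a_l, a_f], Q_follower[a_l, a_f], a_l, a_f)
    return best  # (leader value, follower value, leader action, follower action)

# Toy stage game built from two small payoff matrices.
Ql = np.array([[3.0, 1.0], [4.0, 0.0]])
Qf = np.array([[2.0, 3.0], [1.0, 0.0]])
print(stackelberg_value(Ql, Qf))   # -> (4.0, 1.0, 1, 0)
```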